Neural-Chat-v3-3 is a 7-billion-parameter large language model developed by Intel, based on the Mistral-7B architecture and focused on mathematical reasoning and text generation tasks. The model is fine-tuned on the MetaMathQA dataset and aligned using the Direct Preference Optimization (DPO) method.
Large Language Model
Transformers
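
As a quick illustration, the model can be loaded through the Hugging Face Transformers library. This is a minimal sketch, not an official usage guide; the Hub repository id "Intel/neural-chat-7b-v3-3" is an assumption and should be verified before use.

```python
# Minimal sketch: load Neural-Chat-v3-3 with Transformers and run a short generation.
# The repository id below is assumed, not confirmed by this document.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/neural-chat-7b-v3-3"  # assumed Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple math-flavored prompt to exercise the model's reasoning focus.
prompt = "What is 12 * 7?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```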